Improvements and Bug Fixes for Probabilistic Fairness #27
Conversation
…ogate column not named 'surrogate'. Update unit tests to fix error.
Merge update that fixes bug in summarizer that inserts missing values when membership dataframe surrogate column is not named surrogates.
Keep update_simulation up-to-date with proba_membership_updates. It should include all of that branch's fixes.
…imensional numpy array in EqualizedOdds bias mitigation. Convert to float when necessary.
…s will still print during unit tests because higher-level API will not have the option to turn warnings off. This keeps the API cleaner.
… experiment with min counts per surrogate.
… unit tests do not fail.
…put charts from results.
Review comments added
tests/test_mitigation_binary.py
Outdated
n2p_prob_0 = n2p_prob_0.item()
p2p_prob_1 = p2p_prob_1.item()
n2p_prob_1 = n2p_prob_1.item()
p2p_prob_0 = p2p_prob_0
are these lines needed?
This method returns a null if the values come back empty. Otherwise, you get an error when you try to access.
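For illustration, a minimal sketch of the conversion pattern under discussion, assuming the mitigation helper can return a zero-dimensional numpy array (the empty-result guard is an assumption for the example, not the library's actual check):

import numpy as np

p2p_prob_1 = np.array(0.42)             # hypothetical 0-d array returned by the helper
if p2p_prob_1 is not None and p2p_prob_1.size > 0:
    p2p_prob_1 = p2p_prob_1.item()      # unwrap the 0-d array into a plain Python float
# calling .item() on an empty result would raise, hence the guard above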
hmm.. I guess I am not seeing something. Let's take an example:
p2p_prob_0 = p2p_prob_0
what does this line do? Setting x = x seems redundant to me, but maybe there is something subtle I am missing.
removing these
tests/test_mitigation_binary.py
Outdated
n2p_prob_0 = n2p_prob_0.item()
p2p_prob_1 = p2p_prob_1.item()
n2p_prob_1 = n2p_prob_1.item()
p2p_prob_0 = p2p_prob_0
are these lines needed?
Same as above.
removing these
jurity/utils_proba.py
Outdated
@@ -992,12 +992,13 @@ def make_summary_data(self, perf_df: pd.DataFrame, surrogate_df: pd.DataFrame =
        self.check_surrogate_data(surrogate_df)
        merged_data = perf_df.merge(surrogate_df, left_on=self.surrogate_perf_col_name(),
                                    right_on=self.surrogate_surrogate_col_name())
-       self.check_merged_data(merged_data, perf_df)
+       self.check_merged_data(merged_data, perf_df,warnings)
space after comma ","
fixed
jurity/fairness/__init__.py
Outdated
@@ -147,7 +147,7 @@ def _get_score_logic(metric, name,
    else:
        if name == "StatisticalParity":
            score = metric.get_score(predictions, memberships, surrogates, membership_labels, bootstrap_results)
-       elif name in ["AverageOdds", "EqualOpportunity", "FNRDifference", "PredictiveEquality"]:
+       elif name in ["AverageOdds", "EqualOpportunity", "FNRDifference", "PredictiveEquality","EqualOpportunity"]:
this list already has EqualOpportunity, no?
fixed
Quick question on the "data": where is this data coming from? Is there an original version that we borrow from somewhere else (hence, copyright?), or is this data generated by us?
@@ -0,0 +1,59 @@
+# Probabilistic Fairness Demonstration
I am still not sure about adding data/code from the paper into the library code here.
Particularly because a general user of the library, when they pip install Jurity, is not interested in downloading or copying these files. It also bloats the library size.
Here are two other options we can consider:
- Merge the PR here, without the examples folder. Create a feature branch, named something like probabilistic_fairness_data, and add this "examples" folder in that data branch. We don't have to merge the data branch; it can always stay there. That way anyone can access it, download it, and re-run the experiments, but we don't merge it into the main library.
- Upload the paper data/code to HuggingFace. See https://huggingface.co/datasets/skadio/optimized_item_selection: in that case, we added the optimized item selection data from a published paper as an HF dataset. It uses the Seq2Pat library (akin to Jurity). I can upload the data there if you want, or you can do the same under your account. (A rough loading sketch follows below.)
Thoughts?
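If we went with option 2, here is a rough sketch of how someone could pull the data without it being bundled in the pip package, using the datasets package (the repo id below is the existing Seq2Pat example linked above; a Jurity dataset would get its own id):

from datasets import load_dataset

# Sketch: fetch paper data hosted on the Hugging Face Hub instead of shipping it with the library.
data = load_dataset("skadio/optimized_item_selection")
print(data)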
Probabilistic fairness, its accuracy, and the simulation method used in
these demonstrations are detailed in
<a href="https://doi.org/10.1007/978-3-031-44505-7_29">"Surrogate Membership for Inferred Metrics in Fairness Evaluation"</a>
Can we add a word about which table in the paper comes from which step below?
It would be good to "align" the tables from the paper with the code here, so that when one looks at the paper vs. the folder here, the connection between the two reveals itself.
jurity/fairness/__init__.py
Outdated
@@ -147,7 +147,7 @@ def _get_score_logic(metric, name,
    else:
        if name == "StatisticalParity":
            score = metric.get_score(predictions, memberships, surrogates, membership_labels, bootstrap_results)
-       elif name in ["AverageOdds", "EqualOpportunity", "FNRDifference", "PredictiveEquality"]:
+       elif name in ["AverageOdds", "EqualOpportunity", "FNRDifference", "PredictiveEquality","EqualOpportunity"]:
fixed
jurity/utils_proba.py
Outdated
@@ -791,7 +791,7 @@ def summarize(cls,
        likes_df.columns = membership_names
        likes_df = likes_df.reset_index()
        summarizer = cls("surrogates", "surrogates", "predictions", true_name=label_name, test_names=test_names)
-       return summarizer.make_summary_data(perf_df=df, surrogate_df=likes_df)
+       return summarizer.make_summary_data(perf_df=df, surrogate_df=likes_df,warnings=warnings)
fixed
jurity/utils_proba.py
Outdated
@@ -981,7 +981,7 @@ def check_surrogate_confusion_matrix(self, confusion_df, merged_df):
        # return False
        return True

-   def make_summary_data(self, perf_df: pd.DataFrame, surrogate_df: pd.DataFrame = None):
+   def make_summary_data(self, perf_df: pd.DataFrame, surrogate_df: pd.DataFrame = None,warnings=True):
fixed
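For reference, a minimal usage sketch of the new flag, mirroring the summarize call shown above; passing warnings=False is the assumed way for unit tests to silence the data-quality warnings:

# Assumed usage: suppress summarizer warnings inside unit tests.
summary = summarizer.make_summary_data(perf_df=df, surrogate_df=likes_df, warnings=False)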
jurity/utils_proba.py
Outdated
@@ -992,12 +992,13 @@ def make_summary_data(self, perf_df: pd.DataFrame, surrogate_df: pd.DataFrame =
        self.check_surrogate_data(surrogate_df)
        merged_data = perf_df.merge(surrogate_df, left_on=self.surrogate_perf_col_name(),
                                    right_on=self.surrogate_surrogate_col_name())
-       self.check_merged_data(merged_data, perf_df)
+       self.check_merged_data(merged_data, perf_df,warnings)
fixed
tests/test_mitigation_binary.py
Outdated
n2p_prob_0 = n2p_prob_0.item()
p2p_prob_1 = p2p_prob_1.item()
n2p_prob_1 = n2p_prob_1.item()
p2p_prob_0 = p2p_prob_0
removing these
What
The primary purpose of this PR is to add a path for probabilistic fairness to BinaryFairnessMetrics.get_all_scores(). Other small updates include changing the default minimum weight value from 30 to 5 (this should improve results for small samples, based on simulation results).
Other Features
A new class called UtilsProbaSimulator has been added to test_utils_proba.py. This simplifies unit tests for probabilistic fairness accuracy and enables other users to reproduce results from simulations in academic papers related to probabilistic fairness.
Why
BinaryFairnessMetrics.get_all_scores is the primary path that users will use to access fairness metrics. This makes probabilistic fairness more accessible to Jurity's audience. Adding easy simulation capabilities to test procedures will enable other users to contribute more easily.
How
get_all_scores follows the same rules as Metric.get_scores() in deciding whether the user is asking for probabilistic or deterministic fairness. The rest is already handled by downstream processes implemented in Jurity 2.0.0.
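As a rough usage sketch of the probabilistic path (argument names and order are assumptions based on the get_score calls in the diffs above, not a confirmed signature):

from jurity.fairness import BinaryFairnessMetrics

labels      = [1, 0, 1, 0, 1, 0]                    # ground-truth labels
predictions = [1, 0, 0, 0, 1, 1]                    # model predictions
surrogates  = [10, 10, 20, 20, 30, 30]              # surrogate group ids (e.g., zip codes)
memberships = [[0.2, 0.8], [0.7, 0.3], [0.6, 0.4],  # illustrative membership probabilities per row
               [0.4, 0.6], [0.1, 0.9], [0.8, 0.2]]

# Probabilistic memberships plus surrogates should route get_all_scores down the
# probabilistic branch, following the same rules as the single-metric path.
scores = BinaryFairnessMetrics.get_all_scores(labels, predictions, memberships,
                                              surrogates, membership_labels=[1])
print(scores)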
tests/test_utils_proba.py has a new class called UtilsProbaSimulator, which takes a dictionary of input unfairness characteristics and provides methods for generating simulated data from an input surrogate class.
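A hypothetical sketch of how the simulator might be driven; the constructor taking a dictionary of unfairness characteristics comes from the description above, while the dictionary keys and the generation call are illustrative assumptions:

from tests.test_utils_proba import UtilsProbaSimulator

# Illustrative unfairness characteristics per membership group (key names are assumptions).
unfairness_spec = {
    "group_1": {"pct_positive": 0.6, "fpr": 0.10, "fnr": 0.20},
    "group_2": {"pct_positive": 0.4, "fpr": 0.25, "fnr": 0.35},
}

simulator = UtilsProbaSimulator(unfairness_spec)
# simulated_df = simulator.explode_dataframe(surrogate_df)   # method name is an assumption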